
    Point-light biological motion perception activates human premotor cortex

    Motion cues can be surprisingly powerful in defining objects and events. Specifically, a handful of point-lights attached to the joints of a human actor will evoke a vivid percept of action when the body is in motion. The perception of point-light biological motion activates posterior cortical areas of the brain. On the other hand, observation of others' actions is known to also evoke activity in motor and premotor areas in frontal cortex. In the present study, we investigated whether point-light biological motion animations would lead to activity in frontal cortex as well. We performed a human functional magnetic resonance imaging study on a high-field-strength magnet and used a number of methods to increase signal, as well as cortical surface-based analysis methods. Areas that responded selectively to point-light biological motion were found in lateral and inferior temporal cortex and in inferior frontal cortex. The robust responses we observed in frontal areas indicate that these stimuli can also recruit action observation networks, even though they are highly simplified and convey actions through motion cues alone. The finding that even point-light animations evoke activity in frontal regions suggests that the motor system of the observer may be recruited to "fill in" these simplified displays.

    A computational proof of locality in entanglement

    In this paper, the design and proof-of-concept (POC) coding of a local hidden variables computer model is presented. The program violates the Clauser, Horne, Shimony and Holt inequality |CHSH| ≤ 2. In our numerical experiment, we find with our local computer program CHSH ≈ 1 + √2.
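For context on the inequality above: in the standard CHSH setup, each side chooses between two measurement settings and the quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b') is formed from pair correlations; any local deterministic model obeys |S| ≤ 2, while quantum mechanics reaches 2√2. A minimal sketch, not the paper's program, using a hypothetical sign-of-cosine hidden-variable model (the settings and outcome rule are illustrative assumptions):

```python
import math
import random

def outcome(setting, lam):
    # deterministic +/-1 outcome given a measurement setting and the hidden variable
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a, b, rng, n=100_000):
    # estimate E(a, b) over n runs of the local model
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)  # shared hidden variable
        total += outcome(a, lam) * (-outcome(b, lam))  # anti-correlated pair
    return total / n

rng = random.Random(0)
a, a2 = 0.0, math.pi / 2               # first observer's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # second observer's two settings

S = (correlation(a, b, rng) - correlation(a, b2, rng)
     + correlation(a2, b, rng) + correlation(a2, b2, rng))
print(round(abs(S), 2))  # hovers at the classical bound of 2, never well above it
```

For this particular hidden-variable model the four correlations are -0.5, +0.5, -0.5, -0.5 in expectation, so |S| sits exactly at the classical bound; no choice of local deterministic outcome rule can push it past 2.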

    Visual tests predict dementia risk in Parkinson's disease

    OBJECTIVE To assess the role of visual measures and retinal volume in predicting the risk of Parkinson disease (PD) dementia. METHODS In this cohort study, we collected visual, cognitive, and motor data in people with PD. Participants underwent ophthalmic examination, retinal imaging using optical coherence tomography, and visual assessment including acuity and contrast sensitivity and high-level visuoperception measures of skew tolerance and biological motion. We assessed the risk of PD dementia using a recently described algorithm that combines age at onset, sex, depression, motor scores, and baseline cognition. RESULTS One hundred forty-six people were included in the study (112 with PD and 34 age-matched controls). The mean disease duration was 4.1 (±2.5) years. None of these participants had dementia. Higher risk of dementia was associated with poorer performance on visual measures (acuity: ρ = 0.29, p = 0.0024; contrast sensitivity: ρ = −0.37, p < 0.0001; skew tolerance: ρ = −0.25, p = 0.0073; and biological motion: ρ = −0.26, p = 0.0054). In addition, higher risk of PD dementia was associated with thinner retinal structure in layers containing dopaminergic cells, measured as ganglion cell layer (GCL) and inner plexiform layer (IPL) thinning (ρ = −0.29, p = 0.0021; ρ = −0.33, p = 0.00044). These relationships were not seen for the retinal nerve fiber layer, which does not contain dopaminergic cells, and were not seen in unaffected controls. CONCLUSION Visual measures and retinal structure in dopaminergic layers were related to risk of PD dementia. Our findings suggest that visual measures and retinal GCL and IPL volumes may be useful to predict the risk of dementia in PD.

    Assessing cognitive dysfunction in Parkinson’s: An online tool to detect visuo-perceptual deficits

    BACKGROUND: People with Parkinson’s disease (PD) who develop visuo-perceptual deficits are at higher risk of dementia, but we lack tests that detect subtle visuo-perceptual deficits and can be performed by untrained personnel. Hallucinations are associated with cognitive impairment and typically involve perception of complex objects. Changes in object perception may therefore be a sensitive marker of visuo-perceptual deficits in PD. OBJECTIVE: We developed an online platform to test visuo-perceptual function. We hypothesised that (1) visuo-perceptual deficits in PD could be detected using online tests, (2) object perception would be preferentially affected, and (3) these deficits would be caused by changes in perception rather than response bias. METHODS: We assessed 91 people with PD and 275 controls. Performance was compared using classical frequentist statistics. We then fitted a hierarchical Bayesian signal detection theory model to a subset of tasks. RESULTS: People with PD were worse than controls at object recognition, showing no deficits in other visuo-perceptual tests. Specifically, they were worse at identifying skewed images (P < .0001); at detecting hidden objects (P = .0039); at identifying objects in peripheral vision (P < .0001); and at detecting biological motion (P = .0065). In contrast, people with PD were not worse at mental rotation or subjective size perception. Using signal detection modelling, we found this effect was driven by a change in perceptual sensitivity rather than response bias. CONCLUSIONS: Online tests can detect visuo-perceptual deficits in people with PD, with object recognition particularly affected. Ultimately, visuo-perceptual tests may be developed to identify at-risk patients for clinical trials to slow PD dementia.
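The sensitivity-versus-bias distinction in these results comes from signal detection theory. The study's hierarchical Bayesian model is not reproduced here; a minimal equal-variance SDT sketch (the hit and false-alarm rates below are hypothetical, not the study's data) shows how d′ indexes perceptual sensitivity while the criterion c indexes response bias:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt_measures(hit_rate, fa_rate):
    # equal-variance signal detection theory:
    # d' = perceptual sensitivity, c = response bias (0 means unbiased)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical rates: a control-like observer vs. a less sensitive
# observer who keeps the same neutral response bias
d_ctrl, c_ctrl = sdt_measures(0.90, 0.10)
d_low, c_low = sdt_measures(0.75, 0.25)
```

With symmetric hit and false-alarm rates both observers have c = 0 but different d′: a change in perceptual sensitivity with no change in response bias, which is the pattern the abstract reports.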

    A Bayesian explanation of the 'Uncanny Valley' effect and related psychological phenomena

    There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well-known example is the ‘uncanny valley’ effect, whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions.
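The Bayesian mechanism invoked here can be illustrated with a toy model (an illustrative sketch, not the paper's actual model; the category means, widths, and flat prior are all assumed values): an observer who perceives the posterior-weighted blend of two category prototypes experiences little distortion deep inside a category and maximal distortion just off the category boundary, where conflicting cues pull the percept in opposite directions.

```python
import math

def gauss(x, mu, sigma):
    # Gaussian likelihood up to a constant (both categories share sigma, so it cancels)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# two perceptual categories along a 0..1 "human-likeness" axis (assumed values)
MU_ARTIFACT, MU_HUMAN, SIGMA = 0.2, 0.8, 0.15

def percept(x):
    # posterior-weighted estimate: the stimulus is pulled toward the mean
    # of whichever category is more probable under a flat prior
    p_art = gauss(x, MU_ARTIFACT, SIGMA)
    p_hum = gauss(x, MU_HUMAN, SIGMA)
    w = p_hum / (p_hum + p_art)  # posterior P(human | x)
    return (1 - w) * MU_ARTIFACT + w * MU_HUMAN

def tension(x):
    # perceptual distortion |percept - stimulus|: near zero inside a
    # category, largest for ambiguous stimuli just off the boundary
    return abs(percept(x) - x)
```

Plotting tension(x) over [0, 1] gives peaks flanking the category boundary at 0.5: near-human stimuli are perceptually distorted far more than clearly artificial or clearly human ones, which is the qualitative shape the uncanny valley account needs.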

    Impaired perception of facial motion in autism spectrum disorder

    Copyright: © 2014 O’Brien et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article has been made available through the Brunel Open Access Publishing Fund. Facial motion is a special type of biological motion that transmits cues for socio-emotional communication and enables the discrimination of properties such as gender and identity. We used animated average faces to examine the ability of adults with autism spectrum disorders (ASD) to perceive facial motion. Participants completed increasingly difficult tasks involving the discrimination of (1) sequences of facial motion, (2) the identity of individuals based on their facial motion and (3) the gender of individuals. Stimuli were presented in both upright and upside-down orientations to test for the difference in inversion effects often found when comparing ASD with controls in face perception. The ASD group’s performance was impaired relative to the control group in all three tasks and, unlike the control group, the individuals with ASD failed to show an inversion effect. These results point to a deficit in facial biological motion processing in people with autism, which we suggest is linked to deficits in lower-level motion processing we have previously reported.

    The Second-Agent Effect: Communicative Gestures Increase the Likelihood of Perceiving a Second Agent

    Background: Beyond providing cues about an agent’s intention, communicative actions convey information about the presence of a second agent towards whom the action is directed (second-agent information). In two psychophysical studies we investigated whether the perceptual system makes use of this information to infer the presence of a second agent when dealing with impoverished and/or noisy sensory input. Methodology/Principal Findings: Participants observed point-light displays of two agents (A and B) performing separate actions. In the Communicative condition, agent B’s action was performed in response to a communicative gesture by agent A. In the Individual condition, agent A’s communicative action was replaced with a non-communicative action. Participants performed a simultaneous masking yes-no task, in which they were asked to detect the presence of agent B. In Experiment 1, we investigated whether the criterion c was lowered in the Communicative condition compared to the Individual condition, thus reflecting a variation in perceptual expectations. In Experiment 2, we manipulated the congruence between A’s communicative gesture and B’s response, to ascertain whether the lowering of c in the Communicative condition reflected a truly perceptual effect. Results demonstrate that information extracted from communicative gestures influences the concurrent processing of biological motion by prompting perception of a second agent (second-agent effect). Conclusions/Significance: We propose that this finding is best explained within a Bayesian framework.

    Sensitivity to differences in the motor origin of drawings: from human to robot

    This study explores the idea that an observer is sensitive to differences in the static traces of drawings that are due to differences in motor origin. In particular, our aim was to test whether an observer is able to discriminate between drawings made by a robot and by a human, both when the drawings contain salient kinematic cues for discrimination and when they contain only more subtle kinematic cues. We hypothesized that participants would be able to correctly attribute a drawing to a human or a robot origin when salient kinematic cues are present. Our study shows that observers are also able to detect the producer behind the drawings in the absence of these salient kinematic cues. The design was such that in the absence of salient kinematic cues, the drawings were visually very similar, i.e. differing only in subtle kinematic details, so observers had to rely on these subtle kinematic differences in the line trajectories between drawings. However, not only motor origin (human versus robot) but also motor style (natural versus mechanical) plays a role in attributing a drawing to the correct producer, because participants scored lower when the human hand drew in a relatively mechanical way. Overall, this study suggests that observers are sensitive to subtle kinematic differences between visually similar marks in drawings that have a different motor origin. We offer some possible interpretations inspired by the idea of "motor resonance".

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio-only stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.